A Learning Method by Stochastic Connection Weight Update

Authors

  • Kazuyuki Hara
  • Yoshihisa Amakata
  • Ryohei Nukaga
  • Kenji Nakayama
Abstract

In this paper, we propose a learning method that updates a synaptic weight with a probability proportional to the output error. The proposed method reduces the computational complexity of learning and, at the same time, improves the classification ability. We point out that an example producing a small output error contributes little to the update of a synaptic weight. As learning progresses, the number of small-error examples increases while the number of large-error examples decreases. This imbalance makes it difficult to learn the large-error examples. The proposed method counteracts this phenomenon and improves the learning ability. The validity of the proposed method is confirmed through computer simulations.
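The core idea lends itself to a short sketch. The following is a minimal illustration, not the authors' exact algorithm: a single-layer network trained on squared error, where each example's weight update is applied only with a probability proportional to that example's output error. The normalization `abs(err) / max_error` and the function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_stochastic(X, y, epochs=100, lr=0.1, max_error=1.0):
    """Sketch of error-proportional stochastic weight updates.

    The probability rule (|error| / max_error) is an illustrative
    assumption, not the paper's exact formulation.
    """
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            out = np.tanh(w @ x_i)
            err = y_i - out
            # Apply this example's update only with probability
            # proportional to its output error, so small-error examples
            # are skipped most of the time and cost no computation.
            p = min(abs(err) / max_error, 1.0)
            if rng.random() < p:
                grad = err * (1.0 - out**2) * x_i  # dE/dw, squared error
                w += lr * grad
    return w
```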


Similar Resources

Momentum and Optimal Stochastic Search

The rate of convergence for gradient descent algorithms, both batch and stochastic, can be improved by including in the weight update a “momentum” term proportional to the previous weight update. Several authors [1, 2] give conditions for convergence of the mean and covariance of the weight vector for momentum LMS with a constant learning rate. However, stochastic algorithms require that the learn...
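As a concrete illustration of the momentum term, here is a hedged sketch of momentum LMS in Python: each update adds a fraction `alpha` of the previous update to the usual gradient step. The function name and parameter defaults are hypothetical.

```python
import numpy as np

def momentum_lms(X, d, lr=0.01, alpha=0.9, epochs=50):
    """LMS with a momentum term: w += lr*err*x + alpha*previous_update."""
    w = np.zeros(X.shape[1])
    prev_update = np.zeros_like(w)
    for _ in range(epochs):
        for x_i, d_i in zip(X, d):
            err = d_i - w @ x_i            # instantaneous output error
            update = lr * err * x_i + alpha * prev_update
            w += update
            prev_update = update
    return w
```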


Stochastic Optimization and Machine Learning: Cross-Validation for Cross-Entropy Method

We explore using machine learning techniques to adaptively learn the optimal hyperparameters of a stochastic optimizer as it runs. Specifically, we investigate using multiple importance sampling to weight previously gathered samples of an objective function, combining this with cross-validation to update the exploration/exploitation hyperparameter. We employ this on the Cross-Entropy method as ...
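For context, a bare-bones Cross-Entropy method looks like the sketch below. The elite fraction `rho` is the kind of exploration/exploitation hyperparameter the paper proposes tuning adaptively; the adaptive tuning itself is not reproduced here.

```python
import numpy as np

def cem_minimize(f, dim, iters=50, pop=100, rho=0.1):
    """Minimal Cross-Entropy method for minimizing a black-box objective f."""
    rng = np.random.default_rng(2)
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(rho * pop))
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.apply_along_axis(f, 1, samples)
        # Refit the sampling distribution to the best (elite) samples.
        elite = samples[np.argsort(scores)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-8
    return mu

# Usage: minimize a shifted sphere function.
best = cem_minimize(lambda x: np.sum((x - 3.0) ** 2), dim=5)
```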


Using Curvature Information for Fast Stochastic Search

We present an algorithm for fast stochastic gradient descent that uses a nonlinear adaptive momentum scheme to optimize the late-time convergence rate. The algorithm makes effective use of curvature information, requires only O(n) storage and computation, and delivers convergence rates close to the theoretical optimum. We demonstrate the technique on linear and large nonlinear back-prop networks...
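A speculative sketch of the general idea follows: gradient descent whose momentum coefficient is damped by an O(n) curvature estimate from successive gradients. The specific adaptation rule below is an assumption for illustration, not the paper's scheme.

```python
import numpy as np

def sgd_adaptive_momentum(grad, w0, lr=0.01, steps=1000):
    """SGD with a momentum coefficient adapted to estimated local curvature."""
    w = w0.copy()
    velocity = np.zeros_like(w)
    g_prev = grad(w)
    for _ in range(steps):
        g = grad(w)
        # Curvature estimate along the previous step direction (O(n)).
        step_norm = np.linalg.norm(velocity) + 1e-12
        curv = np.linalg.norm(g - g_prev) / step_norm
        # High curvature -> damp momentum; low curvature -> keep it high.
        alpha = 1.0 / (1.0 + lr * curv)
        velocity = alpha * velocity - lr * g
        w += velocity
        g_prev = g
    return w
```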


Path Integral Stochastic Optimal Control for Reinforcement Learning

Path integral stochastic optimal control based learning methods are among the most efficient and scalable reinforcement learning algorithms. In this work, we present a variation of this idea in which the optimal control policy is approximated through linear regression. This connection allows the use of well-developed linear regression algorithms for learning the optimal policy, e.g. learning...
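The regression step described can be sketched, under assumptions, as cost-weighted least squares: trajectory costs are turned into path-integral soft-max weights and a linear policy is fit to the sampled controls. The names `phi`, `S`, and `lam` are placeholders, not the paper's notation.

```python
import numpy as np

def fit_linear_policy(phi, u, S, lam=1.0):
    """Fit u(x) = theta^T phi(x) by cost-weighted least squares.

    phi: (N, d) state features, u: (N,) sampled controls, S: (N,) path costs.
    """
    w = np.exp(-(S - S.min()) / lam)          # soft-max trajectory weights
    W = np.diag(w / w.sum())
    # Weighted least squares: theta = (phi^T W phi)^-1 phi^T W u
    theta = np.linalg.solve(phi.T @ W @ phi, phi.T @ W @ u)
    return theta
```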


Target Detection in Bistatic Passive Radars by Using Adaptive Processing Based on Correntropy Cost Function

In this paper a novel method is introduced for target detection in bistatic passive radars which uses the concept of correntropy to distinguish correct targets from false detections. In the proposed method, the history of each cell of the ambiguity function is modeled as a stochastic process. The stochastic processes consisting of noise are then differentiated from those containing targets by constructing...
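Correntropy itself has a simple sample estimator. A minimal sketch under a Gaussian kernel is shown below; the kernel width `sigma` is a free parameter, and the radar-specific processing is not reproduced.

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample correntropy V(X, Y) = mean of Gaussian kernel of (x_i - y_i)."""
    diff = np.asarray(x) - np.asarray(y)
    return np.mean(np.exp(-diff**2 / (2.0 * sigma**2)))
```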



Journal title:

Volume   Issue

Pages  -

Publication date: 2001